Generating and using normal maps in Maya
(some sample shaders and scenes)
I recently came upon the website www.crytek.de, where they showcase a really nice plugin for Max that seemingly uses normal maps to add visual detail to low poly models. This is done by converting the surface information of the high resolution version of the model into a normal map, which is then used at render time (here in real time) on the low poly version (screenshots here: http://crytek.com/screenshots/index.php?sx=polybump&px=poly_04.jpg)
A normal map is similar to a bump map. However, a bump map returns a scalar value that indicates a perturbation of the surface along its original normal, whereas a normal map returns a vector (encoded as an RGB color) which can replace or be added to the original surface normal.
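To make the scalar-versus-vector distinction concrete, here is a minimal sketch (not from the article's scenes; the function name and sample data are illustrative) of how a bump map's single height value per texel can be turned into the full normal vector a normal map stores, by taking the gradient of the height field:

```python
import math

def height_to_normal(height, x, y, strength=1.0):
    """Finite-difference gradient of a scalar height field -> unit normal.

    A bump map only stores height[y][x]; a normal map stores the full
    (nx, ny, nz) vector this function reconstructs.
    """
    h, w = len(height), len(height[0])
    # Differences toward the neighbors, clamped at the borders.
    dx = height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]
    dy = height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]
    nx, ny, nz = -dx * strength, -dy * strength, 1.0
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)

# A flat height field yields the unperturbed surface normal.
flat = [[0.5] * 4 for _ in range(4)]
print(height_to_normal(flat, 1, 1))  # close to (0, 0, 1)
```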
It's easy to generate a view or map of an object's normals using false colors (red for x, green for y, blue for z). This is quite useful to create « 2D surface previews » for texturing in a 2D paint package, when you don't have access to a 3D paint package.
Example scene: display_object_normals.ma
Note: the value returned by the samplerInfo node is expressed in camera space. To be most useful, a normal map should be neither camera dependent nor dependent on where the object sits in space. So I use the camera's (here « persp ») worldMatrix to transform the output of the samplerInfo node and obtain normals in world space, then the object's worldInverseMatrix to express them in object space. It would theoretically be cleaner to obtain this result using samplerInfo.matrixEyeToWorld, as connecting the camera directly to a shading network causes it to be reevaluated each time you change the viewpoint in the interactive window, but samplerInfo.matrixEyeToWorld still seems to be quite buggy at the moment.
Finally, the obtained normals have x, y and z coordinates ranging from -1 to 1. To express them in the 0 to 1 color space of red, green and blue, I have to use a setRange node before I output them to the color channel of my shader.
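The whole chain — camera space to world space to object space, then the setRange remap — can be sketched outside Maya like this (a minimal illustration, assuming Maya-style 4x4 row-major matrices; identity matrices stand in for the real worldMatrix / worldInverseMatrix values):

```python
def xform_vector(v, m):
    """Transform a direction (row) vector by a 4x4 matrix, ignoring translation."""
    return tuple(sum(v[i] * m[i][j] for i in range(3)) for j in range(3))

def set_range(v, old_min=-1.0, old_max=1.0, new_min=0.0, new_max=1.0):
    """Equivalent of Maya's setRange: map [-1,1] normal components to [0,1] colors."""
    return tuple((c - old_min) / (old_max - old_min)
                 * (new_max - new_min) + new_min for c in v)

# Identity matrices for brevity: camera, world and object space coincide here.
identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

n_camera = (0.0, 0.0, 1.0)                  # samplerInfo output (camera space)
n_world = xform_vector(n_camera, identity)  # via the camera's worldMatrix
n_object = xform_vector(n_world, identity)  # via the object's worldInverseMatrix
print(set_range(n_object))                  # (0.5, 0.5, 1.0)
```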
If I understood correctly, the idea behind Crytek's plugin is to use the normal map generated from the high resolution model on the low poly version. The results looked so nice that I decided to see if it was possible to pull off a similar stunt using a Maya shading network.
Luckily, Alias recently released a free plugin collection for polygons that you can find on their website. I included the one I'm specifically using:
Thanks to this nice plugin (though quite a slow node to evaluate), for each sampled point of the low poly model, I can look up the closest point on the high resolution version (thus the low poly and high poly models must sit in the same place during the normal map texture creation) and get the value of the normal at this point.
Example scene: torso_baking_normal_map.ma
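The lookup the closestPointOnMesh node performs can be sketched as a brute-force search (a simplification: the real node searches the whole surface, not just vertices, and the data below is illustrative):

```python
def closest_normal(point, hi_vertices, hi_normals):
    """Return the normal of the high-poly vertex nearest to `point`."""
    best_i, best_d = 0, float("inf")
    for i, v in enumerate(hi_vertices):
        d = sum((a - b) ** 2 for a, b in zip(point, v))  # squared distance
        if d < best_d:
            best_i, best_d = i, d
    return hi_normals[best_i]

# A toy high-poly "mesh": three vertices with their normals.
hi_verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
hi_norms = [(0, 0, 1), (1, 0, 0), (0, 1, 0)]

# A low-poly sample near (1, 0, 0) picks up that vertex's normal.
print(closest_normal((0.9, 0.1, 0), hi_verts, hi_norms))  # (1, 0, 0)
```

This also makes clear why both models must sit in the same place: the search is purely positional.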
To use this normal I need to convert it into camera space, as it is originally recovered in world space. I had to do some dirty connections again, using the camera's worldMatrix and the object's worldMatrix. The closestPointOnMesh node is REALLY slow, so don't try to move the camera or the object, especially with the Hypershade open!
Both models, low poly and high poly, stand at the same point; the high poly one is hidden and provides normal information to the low poly one.
The scene isn't workable as it is; all you want to do is convert the normal information to a file texture. For that you'll need to select the low poly model (which must have proper UVs, by the way), then the FIRST setRange node (that will generate proper 0 to 1 values), and use the Convert to File Texture command from the Hypershade menu. Try it at a very low resolution first, as it is REALLY slow. Better to leave « anti-alias » unchecked too, as it slows the calculation quite a lot and doesn't have much impact on this type of map.
When the conversion is over (it is really slow in Maya, even for fairly simple shading networks, so start it at the end of the day and allow a few hours for a model of the same complexity as the one I used), you'll recover a funky colored image like this one:
When converting to a file texture in Maya, a recurrent problem is that the evaluation stops exactly at UV borders, so you get artifacts at UV seams as the shader interpolates UV border pixels of the map with background ones. You'll have to correct the normal map in a 2D paint package, so that colors « bleed » a bit over the UV borders (same as what is done when painting 2D maps on UV layouts):
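The « bleed » fix can also be done automatically with a simple dilation pass; here is a minimal sketch (background texels are marked None, and the data is illustrative, not the article's map):

```python
def bleed(pixels):
    """One dilation pass: copy any colored neighbor into empty (None) texels."""
    h, w = len(pixels), len(pixels[0])
    out = [row[:] for row in pixels]
    for y in range(h):
        for x in range(w):
            if pixels[y][x] is not None:
                continue  # already a UV-island texel, leave it alone
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and pixels[ny][nx] is not None:
                    out[y][x] = pixels[ny][nx]
                    break
    return out

# A 1x3 strip: the single colored texel bleeds into its empty neighbors.
print(bleed([[None, (0.5, 0.5, 1.0), None]]))
```

Run the pass a few times to widen the border as much as the map's mip levels need.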
Now you can use it as a normal map, using only the last part of the shading network to restore the correct -1 to 1 range and express the normals in camera space. It is now fast to render, and as you can see, I think it works quite well given the very low number of faces on the low poly object (254, compared to 12850 on the high poly one):
Example scene: torso_normal_map_baked.ma
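The decoding step at render time is the mirror image of the baking chain; a minimal sketch (identity matrices again stand in for the real object and camera matrices):

```python
def expand(color):
    """Inverse of the baking setRange: [0,1] color -> [-1,1] normal."""
    return tuple(c * 2.0 - 1.0 for c in color)

def xform_vector(v, m):
    """Transform a direction (row) vector by a 4x4 matrix, ignoring translation."""
    return tuple(sum(v[i] * m[i][j] for i in range(3)) for j in range(3))

identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

texel = (0.5, 0.5, 1.0)                       # a baked normal-map pixel
n_object = expand(texel)                      # back to (0.0, 0.0, 1.0)
n_world = xform_vector(n_object, identity)    # via the object's worldMatrix
n_camera = xform_vector(n_world, identity)    # then into camera space
print(n_camera)
```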
This technique can be very useful when preparing LOD objects from a high resolution one, for games or rendering optimisation, while keeping a maximum of visual similarity. As a final point, the best thing for game developers is that graphics boards of the latest generation can support real-time normal maps too!
Note: I got questions from several people asking whether it was some sort of camera map and thus a « one view only » hack. No! The normals in the normal map are expressed in object space, so they do not depend on the position of the object or the camera in space.
We can observe here that it still works quite well with vertex level deformations like skinning or blend shapes (5 successive frames of skeleton animation and one « stretch » blend shape).
However, freezing or deforming the object at the vertex level will mess the normals up to some extent (though using a texture reference object is a possible workaround). I'm now working on a way to express normals in face or point space (using tangentU, tangentV and the normal to build a « point space »), for maximum adaptability to vertex level deformation. I'd like to keep it as a shading node for now, but some tools for the operations I need to do (like inverting a matrix) are present in the API but not as nodes as far as I know; any suggestions and help appreciated.
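The « point space » idea can be sketched as follows (an illustration, not the author's node network: the frame vectors are made up, and since an orthonormal tangent frame's inverse is simply its transpose, no general matrix-inversion node would even be needed in that case):

```python
def transpose(m):
    return [list(row) for row in zip(*m)]

def mat_vec(m, v):
    """Multiply a 3x3 matrix by a column vector."""
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

def frame_to_world(t, b, n):
    """Matrix whose columns are tangentU, tangentV and the normal:
    it maps point-space (tangent-space) vectors into world space."""
    return [[t[i], b[i], n[i]] for i in range(3)]

# An illustrative orthonormal frame at one surface point.
tangent_u = (0.0, 0.0, 1.0)
tangent_v = (0.0, 1.0, 0.0)
normal = (1.0, 0.0, 0.0)
to_world = frame_to_world(tangent_u, tangent_v, normal)

# World -> point space uses the inverse; for an orthonormal frame
# that inverse is just the transpose.
world_normal = (1.0, 0.0, 0.0)  # same direction as the surface normal
print(mat_vec(transpose(to_world), world_normal))  # (0.0, 0.0, 1.0)
```

A normal stored this way follows the surface as it deforms, which is exactly the adaptability the paragraph above is after.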
Olivier Renouard